
    Energy Regeneration and Environment Sensing for Robotic Leg Prostheses and Exoskeletons

    Robotic leg prostheses and exoskeletons can provide powered locomotor assistance to older adults and persons with physical disabilities. However, limitations in automated control and energy-efficient actuation have impeded their transition from research laboratories to real-world environments.

    With regard to control, current automated locomotion mode recognition systems rely on mechanical, inertial, and/or neuromuscular sensors, which inherently have limited prediction horizons (i.e., analogous to walking blindfolded). Inspired by the human vision-locomotor control system, a multi-generation environment sensing and classification system powered by computer vision and deep learning was developed to predict oncoming walking environments prior to physical interaction, thereby allowing more accurate and robust high-level control decisions. To support this initiative, the "ExoNet" database was developed: the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, annotated using a novel hierarchical labelling architecture. Over a dozen state-of-the-art deep convolutional neural networks (CNNs) were trained and tested on ExoNet for large-scale image classification and automatic feature engineering (a minimal training sketch follows the abstract). The benchmarked CNN architectures and their environment classification predictions were then quantitatively evaluated and compared using an operational metric called "NetScore", which balances classification accuracy against architectural and computational complexity (important for onboard real-time inference on mobile computing devices). Of the benchmarked architectures, EfficientNetB0 achieved the highest test accuracy, VGG16 the fastest inference time, and MobileNetV2 the best NetScore. These comparative results can inform architecture design or selection depending on the desired performance of an environment classification system.

    With regard to energetics, backdrivable actuators with energy regeneration can improve energy efficiency and extend battery-powered operating durations by converting some of the energy otherwise dissipated during negative mechanical work into electrical energy. However, the evaluation and control of these regenerative actuators have focused on steady-state level-ground walking. To encompass real-world community mobility more broadly, an energy regeneration system, featuring mathematical and computational models of human and wearable robotic systems, was developed to simulate energy regeneration and storage during other locomotor activities of daily living, specifically stand-to-sit movements. Parameter identification and inverse dynamic simulations of subject-specific optimized biomechanical models were used to calculate the negative joint mechanical work and power while sitting down (i.e., the mechanical energy theoretically available for electrical energy regeneration). These joint mechanical energetics were then used to simulate a robotic exoskeleton being backdriven and regenerating energy. An empirical characterization of an exoskeleton was carried out using a joint dynamometer system and an electromechanical motor model to calculate the actuator efficiency and to simulate energy regeneration and storage with the exoskeleton parameters (a sketch of this energy bookkeeping also follows the abstract). The performance calculations showed that regenerating electrical energy during stand-to-sit movements provides small improvements in energy efficiency and battery-powered operating durations.

    In summary, this research involved the development and evaluation of environment classification and energy regeneration systems to improve the automated control and energy-efficient actuation of next-generation robotic leg prostheses and exoskeletons for real-world locomotor assistance.
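    As a concrete illustration of the CNN training described above, the following is a minimal transfer-learning sketch in TensorFlow/Keras. It is not the thesis's actual training pipeline: the dataset path, class count, and hyperparameters are placeholders, and MobileNetV2 is used simply because it is one of the benchmarked architectures.

```python
# Minimal transfer-learning sketch for walking-environment classification
# (illustrative only; dataset path and class count are placeholders,
# not the actual ExoNet configuration).
import tensorflow as tf

NUM_CLASSES = 12          # hypothetical number of environment classes
IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "exonet/train", image_size=IMG_SIZE, batch_size=32)  # placeholder path

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False    # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```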
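    The NetScore metric used for the comparison is commonly written as Omega(N) = 20 * log10(a(N)^alpha / (p(N)^beta * m(N)^gamma)), where a is top-1 accuracy, p the number of parameters, and m the number of multiply-accumulate operations, with default coefficients alpha = 2 and beta = gamma = 0.5. A small sketch under that formulation; the exact unit conventions should be taken from the original NetScore paper, and the example numbers are ballpark public figures, not the thesis results.

```python
import math

def netscore(acc_percent, params_m, macs_m, alpha=2.0, beta=0.5, gamma=0.5):
    """NetScore: Omega = 20*log10(a^alpha / (p^beta * m^gamma)).

    acc_percent: top-1 accuracy in percent
    params_m:    number of parameters, in millions
    macs_m:      multiply-accumulate operations, in millions
    Coefficients follow the commonly cited defaults (alpha=2, beta=gamma=0.5).
    """
    return 20.0 * math.log10(
        acc_percent ** alpha / (params_m ** beta * macs_m ** gamma))

# Illustrative comparison with approximate published figures:
for name, a, p, m in [("MobileNetV2", 73.0, 3.5, 300.0),
                      ("VGG16", 71.5, 138.0, 15500.0)]:
    print(f"{name}: NetScore = {netscore(a, p, m):.1f}")
```

    The metric rewards accuracy (squared) while penalizing parameter count and compute, which is why a compact network like MobileNetV2 can outscore a more accurate but much heavier one.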
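    The energy bookkeeping behind the regeneration simulations reduces to integrating joint power P = tau * omega over the phases where it is negative, then scaling by an actuator conversion efficiency. A minimal sketch, assuming torque and angular-velocity trajectories from inverse dynamics and a placeholder constant efficiency (not the empirically characterized value):

```python
import numpy as np

def regenerated_energy(torque, omega, t, efficiency=0.6):
    """Electrical energy recoverable from negative joint mechanical work.

    torque:     joint torque trajectory [N*m] (e.g., from inverse dynamics)
    omega:      joint angular velocity trajectory [rad/s]
    t:          time stamps [s]
    efficiency: assumed mechanical-to-electrical conversion efficiency
                (placeholder value for illustration)
    """
    power = torque * omega                      # instantaneous joint power [W]
    neg_power = np.minimum(power, 0.0)          # keep only dissipative (negative) phases
    # Trapezoidal integration of negative power -> negative mechanical work [J]
    neg_work = float(np.sum(0.5 * (neg_power[1:] + neg_power[:-1]) * np.diff(t)))
    return efficiency * abs(neg_work)           # regenerated electrical energy [J]

# Toy stand-to-sit example with synthetic trajectories (illustrative only):
t = np.linspace(0.0, 2.0, 200)
torque = 60.0 * np.sin(np.pi * t / 2.0)   # extensor torque resisting the descent
omega = -1.0 * np.sin(np.pi * t / 2.0)    # joint flexes against the torque -> negative power
print(f"Regenerated energy: {regenerated_energy(torque, omega, t):.1f} J")
```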

    StairNet: Visual Recognition of Stairs for Human-Robot Locomotion

    Human-robot walking with prosthetic legs and exoskeletons, especially over complex terrain such as stairs, remains a significant challenge. Egocentric vision has the unique potential to detect the walking environment prior to physical interaction, which can improve transitions to and from stairs. This motivated us to create the StairNet initiative to support the development of new deep learning models for visual sensing and recognition of stairs, with an emphasis on lightweight and efficient neural networks for onboard real-time inference. In this study, we present an overview of the development of our large-scale dataset of over 515,000 manually labeled images, as well as our development of different deep learning models (e.g., 2D and 3D CNNs, hybrid CNN-LSTM networks, and ViTs) and training methods (e.g., supervised learning with temporal data and semi-supervised learning with unlabeled images) using our new dataset. We consistently achieved high classification accuracy (up to 98.8%) across different designs, offering trade-offs between model accuracy and size. When deployed on mobile devices with GPU and NPU accelerators, our deep learning models achieved inference times as low as 2.8 ms. We also deployed our models on custom-designed CPU-powered smart glasses; however, limitations of the embedded hardware yielded slower inference times of 1.5 seconds, presenting a trade-off between human-centered design and performance. Overall, we showed that StairNet can be an effective platform for developing and studying new visual perception systems for human-robot locomotion, with applications in exoskeleton and prosthetic leg control.
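    For the on-device deployment emphasized above, a common route is converting a trained Keras model to TensorFlow Lite with float16 quantization for mobile GPU/NPU execution. The sketch below assumes that toolchain; the model filename is a placeholder, and the actual StairNet deployment pipeline may differ.

```python
import tensorflow as tf

# Convert a trained stair-recognition model to TensorFlow Lite for
# on-device inference (illustrative; "stairnet_model.keras" is a
# placeholder, not a released StairNet artifact).
model = tf.keras.models.load_model("stairnet_model.keras")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # float16 quantization for mobile GPU/NPU
tflite_model = converter.convert()

with open("stairnet.tflite", "wb") as f:
    f.write(tflite_model)
```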